Power Efficient Instruction Caches for Embedded Systems

Authors

  • Dinesh C. Suresh
  • Walid A. Najjar
  • Jun Yang
Abstract

Instruction caches typically consume 27% of the total power in modern high-end embedded systems. We propose a compiler-managed instruction store architecture (K-store) that places the computation-intensive loops in a scratchpad-like SRAM memory and allocates the remaining instructions to a regular instruction cache. At runtime, execution is switched dynamically between the instructions in the traditional instruction cache and the ones in the K-store by inserting jump instructions. The necessary jump instructions add, on average, 0.038% to the total dynamic instruction count. We compare the performance and energy consumption of our K-store with that of a conventional instruction cache of equal size. When used in lieu of an 8KB, 4-way associative instruction cache, the K-store provides a 32% reduction in energy and a 7% reduction in execution time. Unlike loop caches, the K-store maps the frequent code in a reserved address space and hence can switch between the kernel memory and the instruction cache without any noticeable performance penalty.
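The K-store idea in the abstract lends itself to a small illustration. The C sketch below is not the authors' compiler flow; it simply hand-places a hypothetical hot loop into a dedicated linker section (here named ".kstore") that a linker script could map to the scratchpad-like SRAM, with an ordinary call and return standing in for the inserted jumps between cached code and kernel code.

#include <stdio.h>

/* Hypothetical illustration: pin a computation-intensive kernel into a
   dedicated linker section (".kstore"). On a real embedded target a linker
   script would map this section to the scratchpad-like SRAM holding the
   K-store; the call and return below play the role of the inserted jumps
   between the regular instruction stream and the kernel code. */
__attribute__((section(".kstore")))
static long dot_product(const int *a, const int *b, int n)
{
    long sum = 0;
    for (int i = 0; i < n; i++)      /* the hot loop kept in the K-store */
        sum += (long)a[i] * b[i];
    return sum;                      /* jump back to code in the I-cache */
}

int main(void)
{
    int a[4] = {1, 2, 3, 4}, b[4] = {5, 6, 7, 8};
    printf("%ld\n", dot_product(a, b, 4));   /* jump into the K-store */
    return 0;
}

In the actual design the compiler identifies the kernels and inserts the jump instructions itself; because the K-store occupies a reserved address range, switching between it and the instruction cache needs no extra bookkeeping at runtime.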


Similar Articles

An Interpolative Analytical Cache Model with Application to Performance-Power Design Space Exploration

Caches are known to consume up to half of all system power in embedded processors. Co-optimizing performance and power consumption of the cache subsystem is therefore an important step in the design of embedded systems, especially those employing application-specific instruction processors. One of the main difficulties in such attempts is that cache behaviors are application as well as cache-stru...


Efficient Scratchpad Allocation Algorithms for Energy Constrained Embedded Systems

In the context of portable embedded systems, reducing energy is one of the prime objectives. Memories are responsible for a significant percentage of a system’s aggregate energy consumption. Consequently, novel memories as well as novel memory hierarchies are being designed to reduce the energy consumption. Caches and scratchpads are two contrasting variants of memory architectures. The former ...


A Behavior-based Adaptive Access-mode for Low-power Set-associative Caches in Embedded Systems

Modern embedded processors commonly use a set-associative scheme to reduce cache misses. However, a conventional set-associative cache has its drawbacks in terms of power consumption because it has to probe all ways to reduce the access time, although only the matched way is used. The energy spent in accessing the other ways is wasted, and the percentage of such energy will increase as cache as...
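To make the waste concrete, here is a back-of-the-envelope comparison of a conventional parallel-probe access against a phased (tag-first) access; the per-array energy numbers and the 4-way configuration are invented for illustration, not taken from the paper.

#include <stdio.h>

/* Illustrative only: per-array energies and the 4-way configuration are
   invented placeholders, not figures from the paper. */
#define WAYS        4
#define E_TAG_WAY   0.05   /* energy to read one tag array  (nJ) */
#define E_DATA_WAY  0.20   /* energy to read one data array (nJ) */

int main(void)
{
    /* Conventional access: every tag and data way is probed in parallel,
       although only the matching way is ultimately used. */
    double parallel = WAYS * (E_TAG_WAY + E_DATA_WAY);

    /* Phased access: read all tags first, then only the matching data way;
       less energy per hit, at the cost of a longer access time. */
    double phased = WAYS * E_TAG_WAY + E_DATA_WAY;

    printf("parallel probe: %.2f nJ per hit\n", parallel);
    printf("phased access:  %.2f nJ per hit\n", phased);
    return 0;
}

With these placeholder numbers, the gap between the two modes widens as associativity grows, which is exactly the trend the excerpt describes.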


A Power Modeling and Estimation Framework for VLIW-based Embedded Systems

In this paper, a framework for modeling and estimating the system-level power consumption for embedded VLIW (Very Long Instruction Word) architectures is proposed. Power macro-models have been developed for the main components of the system, namely the core, the register file, and the instruction and data caches. The models have been integrated within a hierarchy of dynamic power estimation engines...
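As a rough sketch of how such component macro-models are typically combined (the component list, coefficients, and activity counts below are placeholders, not the framework's actual models), a system-level estimate sums per-component activity-driven terms:

#include <stdio.h>

/* Toy macro-model P = P_static + alpha * activity. All coefficients and
   activity counts are invented placeholders; a real framework would
   characterize them per component for the target VLIW core. */
struct macro_model { const char *name; double p_static; double alpha; };

int main(void)
{
    struct macro_model comp[] = {
        { "core",          5.0, 0.020 },
        { "register file", 1.0, 0.010 },
        { "i-cache",       2.0, 0.015 },
        { "d-cache",       2.0, 0.012 },
    };
    double activity[] = { 800.0, 600.0, 700.0, 250.0 };   /* accesses */

    double total = 0.0;
    for (int i = 0; i < 4; i++) {
        double p = comp[i].p_static + comp[i].alpha * activity[i];
        printf("%-14s %6.2f mW\n", comp[i].name, p);
        total += p;
    }
    printf("%-14s %6.2f mW\n", "total", total);
    return 0;
}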


A Buffered Dual-Access-Mode Scheme Designed for Low-Power Highly-Associative Caches

This paper proposes a buffered dual-access-mode cache to reduce power consumption for highly-associative caches in modern embedded systems. The proposed scheme consists of a MRU (most recently used) buffer table and a single cache structure to implement two accessing modes, phased mode and way-prediction mode. The proposed scheme shows better access time and lower power consumption than two pop...
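A heavily simplified stand-in for probing a predicted (MRU) way first and falling back to a phased lookup is sketched below; the table layout, sizes, and probe counting are assumptions made for illustration, not the proposed scheme's actual organization.

#include <stdio.h>

#define SETS  64
#define WAYS   4

/* Simplified stand-in for MRU-guided way prediction: everything below is
   an assumption for illustration, not the buffered dual-access-mode design. */
static int mru_way[SETS];              /* predicted way per set */
static unsigned tags[SETS][WAYS];      /* stored tags           */

/* Probe the predicted (MRU) way first; on a mispredict, fall back to a
   phased lookup over the remaining ways. The returned probe count is a
   rough proxy for dynamic energy spent on the access. */
static int access_set(int set, unsigned tag)
{
    int probes = 1;
    if (tags[set][mru_way[set]] == tag)
        return probes;                 /* prediction hit: one probe */
    for (int w = 0; w < WAYS; w++) {
        if (w == mru_way[set])
            continue;                  /* already probed */
        probes++;
        if (tags[set][w] == tag) {
            mru_way[set] = w;          /* update the MRU record on a hit */
            return probes;
        }
    }
    return probes;                     /* cache miss: all ways probed */
}

int main(void)
{
    tags[3][2] = 0xABCD;
    printf("first access:  %d probes\n", access_set(3, 0xABCD));  /* 3 */
    printf("second access: %d probes\n", access_set(3, 0xABCD));  /* 1 */
    return 0;
}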




Publication date: 2005